The following tasks are to be completed in the course of the graduation thesis (ВКР):
1) Study the theoretical foundations and the methods for solving the stated problem.
2) Perform exploratory analysis of the provided data: plot a histogram of the distribution of each variable, box-and-whisker plots, and pairwise scatter plots. For each column, compute the mean and the median, analyze and remove outliers, and check for missing values.
3) Preprocess the data (noise removal, normalization, etc.).
4) Train several models to predict the tensile modulus of elasticity and the tensile strength. When building the models, hold out 30% of the data for testing and train on the rest. Search for model hyperparameters via grid search with cross-validation, with the number of folds equal to 10.
5) Build a neural network that recommends the matrix-to-filler ratio.
6) Develop an application with a graphical or command-line interface that outputs the forecast obtained in task 4 or 5 (one or two forecasts, at the student's choice).
7) Evaluate model accuracy on the training and test datasets.
8) Create a repository on GitHub / GitLab and publish the research code there. Write a README file.
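Items 4 above fix the experimental protocol: a 70/30 train/test split and a grid search with 10-fold cross-validation. The following is a minimal sketch of that protocol on synthetic placeholder data (the feature matrix and the parameter grid here are illustrative, not the ones used later in the analysis):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the real dataset (the actual features come from X_bp.xlsx / X_nup.xlsx).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Item 4: hold out 30% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Grid search with 10-fold cross-validation, run on the training part only.
grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={"n_estimators": [20, 50], "max_depth": [3, 5]},
    cv=10,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```

The key point is that the test 30% never enters the cross-validation loop; the folds are drawn from the training part only.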
Composite materials are man-made materials composed of several others with a distinct boundary between them. Composites exhibit properties that none of the components shows on its own. At the same time, a composite is a monolithic material: its components cannot be separated without destroying the structure as a whole. A classic example is reinforced concrete. Concrete resists compression very well, but resists tension poorly. Steel reinforcement inside the concrete compensates for its inability to resist tension, thereby creating new, unique properties. Modern composites are made from other materials (polymers, ceramics, glass and carbon fibres), but the principle is the same. This approach has a drawback: even when the characteristics of the source components are known, determining the characteristics of a composite built from them is quite problematic. There are two ways to address this problem: physical testing of material samples, or prediction of the characteristics. Prediction amounts to simulating a representative volume element of the composite based on the known characteristics of its constituents (the binder and the reinforcement).
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import sklearn
import itertools
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, LogisticRegressionCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error, accuracy_score, roc_curve
from pandas import DataFrame
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Input
from tensorflow.keras import layers
from sklearn.model_selection import GridSearchCV
df_bp=pd.read_excel('X_bp.xlsx', index_col=0)
df_bp
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.0 | 1.857143 | 2030.000000 | 738.736842 | 30.000000 | 22.267857 | 100.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 |
| 1.0 | 1.857143 | 2030.000000 | 738.736842 | 50.000000 | 23.750000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 |
| 2.0 | 1.857143 | 2030.000000 | 738.736842 | 49.900000 | 33.000000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 |
| 3.0 | 1.857143 | 2030.000000 | 738.736842 | 129.000000 | 21.250000 | 300.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 |
| 4.0 | 2.771331 | 2030.000000 | 753.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1018.0 | 2.271346 | 1952.087902 | 912.855545 | 86.992183 | 20.123249 | 324.774576 | 209.198700 | 73.090961 | 2387.292495 | 125.007669 |
| 1019.0 | 3.444022 | 2050.089171 | 444.732634 | 145.981978 | 19.599769 | 254.215401 | 350.660830 | 72.920827 | 2360.392784 | 117.730099 |
| 1020.0 | 3.280604 | 1972.372865 | 416.836524 | 110.533477 | 23.957502 | 248.423047 | 740.142791 | 74.734344 | 2662.906040 | 236.606764 |
| 1021.0 | 3.705351 | 2066.799773 | 741.475517 | 141.397963 | 19.246945 | 275.779840 | 641.468152 | 74.042708 | 2071.715856 | 197.126067 |
| 1022.0 | 3.808020 | 1890.413468 | 417.316232 | 129.183416 | 27.474763 | 300.952708 | 758.747882 | 74.309704 | 2856.328932 | 194.754342 |
1023 rows × 10 columns
df_nup=pd.read_excel('X_nup.xlsx', index_col=0)
df_nup
| | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|
| 0.0 | 0.0 | 4.000000 | 57.000000 |
| 1.0 | 0.0 | 4.000000 | 60.000000 |
| 2.0 | 0.0 | 4.000000 | 70.000000 |
| 3.0 | 0.0 | 5.000000 | 47.000000 |
| 4.0 | 0.0 | 5.000000 | 57.000000 |
| ... | ... | ... | ... |
| 1035.0 | 90.0 | 8.088111 | 47.759177 |
| 1036.0 | 90.0 | 7.619138 | 66.931932 |
| 1037.0 | 90.0 | 9.800926 | 72.858286 |
| 1038.0 | 90.0 | 10.079859 | 65.519479 |
| 1039.0 | 90.0 | 9.021043 | 66.920143 |
1040 rows × 3 columns
df = df_bp.merge(df_nup, left_index=True,right_index=True, how='inner')
df
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0 | 1.857143 | 2030.000000 | 738.736842 | 30.000000 | 22.267857 | 100.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 4.000000 | 57.000000 |
| 1.0 | 1.857143 | 2030.000000 | 738.736842 | 50.000000 | 23.750000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 4.000000 | 60.000000 |
| 2.0 | 1.857143 | 2030.000000 | 738.736842 | 49.900000 | 33.000000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 4.000000 | 70.000000 |
| 3.0 | 1.857143 | 2030.000000 | 738.736842 | 129.000000 | 21.250000 | 300.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 47.000000 |
| 4.0 | 2.771331 | 2030.000000 | 753.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 57.000000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1018.0 | 2.271346 | 1952.087902 | 912.855545 | 86.992183 | 20.123249 | 324.774576 | 209.198700 | 73.090961 | 2387.292495 | 125.007669 | 90.0 | 9.076380 | 47.019770 |
| 1019.0 | 3.444022 | 2050.089171 | 444.732634 | 145.981978 | 19.599769 | 254.215401 | 350.660830 | 72.920827 | 2360.392784 | 117.730099 | 90.0 | 10.565614 | 53.750790 |
| 1020.0 | 3.280604 | 1972.372865 | 416.836524 | 110.533477 | 23.957502 | 248.423047 | 740.142791 | 74.734344 | 2662.906040 | 236.606764 | 90.0 | 4.161154 | 67.629684 |
| 1021.0 | 3.705351 | 2066.799773 | 741.475517 | 141.397963 | 19.246945 | 275.779840 | 641.468152 | 74.042708 | 2071.715856 | 197.126067 | 90.0 | 6.313201 | 58.261074 |
| 1022.0 | 3.808020 | 1890.413468 | 417.316232 | 129.183416 | 27.474763 | 300.952708 | 758.747882 | 74.309704 | 2856.328932 | 194.754342 | 90.0 | 6.078902 | 77.434468 |
1023 rows × 13 columns
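An inner merge on the index keeps only the labels present in both frames, which is why the 1040 rows of df_nup shrink to the 1023 shared with df_bp. A toy illustration of the same call:

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2, 3]}, index=[0, 1, 2])
b = pd.DataFrame({"y": [10, 20, 30]}, index=[1, 2, 3])

# how='inner' keeps only index labels that occur in both frames
merged = a.merge(b, left_index=True, right_index=True, how="inner")
print(merged.index.tolist())  # only the shared labels: [1, 2]
```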
df.describe()
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 | 1023.000000 |
| mean | 2.930366 | 1975.734888 | 739.923233 | 110.570769 | 22.244390 | 285.882151 | 482.731833 | 73.328571 | 2466.922843 | 218.423144 | 44.252199 | 6.899222 | 57.153929 |
| std | 0.913222 | 73.729231 | 330.231581 | 28.295911 | 2.406301 | 40.943260 | 281.314690 | 3.118983 | 485.628006 | 59.735931 | 45.015793 | 2.563467 | 12.350969 |
| min | 0.389403 | 1731.764635 | 2.436909 | 17.740275 | 14.254985 | 100.000000 | 0.603740 | 64.054061 | 1036.856605 | 33.803026 | 0.000000 | 0.000000 | 0.000000 |
| 25% | 2.317887 | 1924.155467 | 500.047452 | 92.443497 | 20.608034 | 259.066528 | 266.816645 | 71.245018 | 2135.850448 | 179.627520 | 0.000000 | 5.080033 | 49.799212 |
| 50% | 2.906878 | 1977.621657 | 739.664328 | 110.564840 | 22.230744 | 285.896812 | 451.864365 | 73.268805 | 2459.524526 | 219.198882 | 0.000000 | 6.916144 | 57.341920 |
| 75% | 3.552660 | 2021.374375 | 961.812526 | 129.730366 | 23.961934 | 313.002106 | 693.225017 | 75.356612 | 2767.193119 | 257.481724 | 90.000000 | 8.586293 | 64.944961 |
| max | 5.591742 | 2207.773481 | 1911.536477 | 198.953207 | 33.000000 | 413.273418 | 1399.542362 | 82.682051 | 3848.436732 | 414.590628 | 90.000000 | 14.440522 | 103.988901 |
df.duplicated().sum()
0
df.info()
<class 'pandas.core.frame.DataFrame'>
Float64Index: 1023 entries, 0.0 to 1022.0
Data columns (total 13 columns):
 #   Column                                Non-Null Count  Dtype
---  ------                                --------------  -----
 0   Соотношение матрица-наполнитель       1023 non-null   float64
 1   Плотность, кг/м3                      1023 non-null   float64
 2   модуль упругости, ГПа                 1023 non-null   float64
 3   Количество отвердителя, м.%           1023 non-null   float64
 4   Содержание эпоксидных групп,%_2       1023 non-null   float64
 5   Температура вспышки, С_2              1023 non-null   float64
 6   Поверхностная плотность, г/м2         1023 non-null   float64
 7   Модуль упругости при растяжении, ГПа  1023 non-null   float64
 8   Прочность при растяжении, МПа         1023 non-null   float64
 9   Потребление смолы, г/м2               1023 non-null   float64
 10  Угол нашивки, град                    1023 non-null   float64
 11  Шаг нашивки                           1023 non-null   float64
 12  Плотность нашивки                     1023 non-null   float64
dtypes: float64(13)
memory usage: 111.9 KB
# Histogram of the distribution of each variable
for col in df.columns:
    plt.figure(figsize=(15, 5))
    plt.title("Histogram of " + str(col))
    sns.histplot(data=df[col])
    plt.show()
# Box-and-whisker plot for each variable
for col in df.columns:
    sns.catplot(y=col, data=df, kind='box')
    plt.title(col, fontsize=15)
cols = df.columns
g = sns.PairGrid(df[cols])
g.map(sns.scatterplot)
<seaborn.axisgrid.PairGrid at 0x2c0ae885ee0>
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
# Create a large canvas for the heatmap
f, ax = plt.subplots(figsize=(15, 15))
# Create a diverging colour palette
cmap = sns.diverging_palette(15, 3, as_cmap=True)
# Visualize the correlation matrix
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
<AxesSubplot:>
# List the features where more than 95% of rows contain the same value.
num_rows = len(df.index)
low_information_cols = []
for col in df.columns:
    cnts = df[col].value_counts(dropna=False)
    top_pct = (cnts / num_rows).iloc[0]
    if top_pct > 0.95:
        low_information_cols.append(col)
        print('{0}: {1:.5f}%'.format(col, top_pct * 100))
        print(cnts)
        print()
# Remove outliers using the 1.5*IQR (interquartile range) rule
for x in df.columns:
    q75, q25 = np.percentile(df.loc[:, x], [75, 25])
    intr_qr = q75 - q25
    upper = q75 + (1.5 * intr_qr)
    lower = q25 - (1.5 * intr_qr)
    # mark values outside the whiskers as missing
    df.loc[df[x] < lower, x] = np.nan
    df.loc[df[x] > upper, x] = np.nan
# count how many outliers were flagged in each column
df.isnull().sum()
Соотношение матрица-наполнитель          6
Плотность, кг/м3                         9
модуль упругости, ГПа                    2
Количество отвердителя, м.%             14
Содержание эпоксидных групп,%_2          2
Температура вспышки, С_2                 8
Поверхностная плотность, г/м2            2
Модуль упругости при растяжении, ГПа     6
Прочность при растяжении, МПа           11
Потребление смолы, г/м2                  8
Угол нашивки, град                       0
Шаг нашивки                              4
Плотность нашивки                       21
dtype: int64
# The number of outliers is small, so the affected rows can simply be dropped
df = df.dropna(axis=0)
# Re-examine the box plots after outlier removal
for col in df.columns:
    sns.catplot(y=col, data=df, kind='box')
    plt.title(col, fontsize=15)
scaler = MinMaxScaler()
norm_df=pd.DataFrame(scaler.fit_transform(df),
columns=df.columns, index=df.index)
norm_df.describe()
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 | 936.000000 |
| mean | 0.498933 | 0.502695 | 0.446764 | 0.504664 | 0.491216 | 0.516059 | 0.373733 | 0.488647 | 0.495706 | 0.521141 | 0.511752 | 0.502232 | 0.513776 |
| std | 0.187489 | 0.187779 | 0.199583 | 0.188865 | 0.180620 | 0.190624 | 0.217078 | 0.191466 | 0.188915 | 0.195781 | 0.500129 | 0.183258 | 0.191342 |
| min | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 25% | 0.372274 | 0.368517 | 0.301243 | 0.376190 | 0.367716 | 0.386128 | 0.205619 | 0.359024 | 0.365149 | 0.392067 | 0.000000 | 0.372211 | 0.390482 |
| 50% | 0.494538 | 0.511229 | 0.447061 | 0.506040 | 0.489382 | 0.515980 | 0.354161 | 0.485754 | 0.491825 | 0.523766 | 1.000000 | 0.504258 | 0.516029 |
| 75% | 0.629204 | 0.624999 | 0.580446 | 0.637978 | 0.623410 | 0.646450 | 0.538683 | 0.615077 | 0.612874 | 0.652447 | 1.000000 | 0.624604 | 0.638842 |
| max | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
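Because the models are trained on MinMax-scaled values, any prediction reported to a user (for example by the application from item 6) has to be mapped back to physical units; `MinMaxScaler.inverse_transform` does exactly that. A minimal sketch on a toy column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[10.0], [20.0], [40.0]])  # toy column in original units
scaler = MinMaxScaler()
norm = scaler.fit_transform(data)          # values scaled into [0, 1]

# A "prediction" of 0.5 in normalized space maps back to the original scale:
restored = scaler.inverse_transform([[0.5]])
print(restored)  # halfway between min=10 and max=40 -> 25.0
```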
df
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0 | 1.857143 | 2030.000000 | 738.736842 | 50.000000 | 23.750000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 4.000000 | 60.000000 |
| 3.0 | 1.857143 | 2030.000000 | 738.736842 | 129.000000 | 21.250000 | 300.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 47.000000 |
| 4.0 | 2.771331 | 2030.000000 | 753.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 57.000000 |
| 5.0 | 2.767918 | 2000.000000 | 748.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 60.000000 |
| 6.0 | 2.569620 | 1910.000000 | 807.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 70.000000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1018.0 | 2.271346 | 1952.087902 | 912.855545 | 86.992183 | 20.123249 | 324.774576 | 209.198700 | 73.090961 | 2387.292495 | 125.007669 | 90.0 | 9.076380 | 47.019770 |
| 1019.0 | 3.444022 | 2050.089171 | 444.732634 | 145.981978 | 19.599769 | 254.215401 | 350.660830 | 72.920827 | 2360.392784 | 117.730099 | 90.0 | 10.565614 | 53.750790 |
| 1020.0 | 3.280604 | 1972.372865 | 416.836524 | 110.533477 | 23.957502 | 248.423047 | 740.142791 | 74.734344 | 2662.906040 | 236.606764 | 90.0 | 4.161154 | 67.629684 |
| 1021.0 | 3.705351 | 2066.799773 | 741.475517 | 141.397963 | 19.246945 | 275.779840 | 641.468152 | 74.042708 | 2071.715856 | 197.126067 | 90.0 | 6.313201 | 58.261074 |
| 1022.0 | 3.808020 | 1890.413468 | 417.316232 | 129.183416 | 27.474763 | 300.952708 | 758.747882 | 74.309704 | 2856.328932 | 194.754342 | 90.0 | 6.078902 | 77.434468 |
936 rows × 13 columns
target = norm_df['Прочность при растяжении, МПа']
train = norm_df[[ 'модуль упругости, ГПа']]
cols=['Прочность при растяжении, МПа','модуль упругости, ГПа']
g = sns.PairGrid(df[cols])
g.map(sns.scatterplot)
<seaborn.axisgrid.PairGrid at 0x2c0b6f216a0>
X = df.copy()
X
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0 | 1.857143 | 2030.000000 | 738.736842 | 50.000000 | 23.750000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 4.000000 | 60.000000 |
| 3.0 | 1.857143 | 2030.000000 | 738.736842 | 129.000000 | 21.250000 | 300.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 47.000000 |
| 4.0 | 2.771331 | 2030.000000 | 753.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 57.000000 |
| 5.0 | 2.767918 | 2000.000000 | 748.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 60.000000 |
| 6.0 | 2.569620 | 1910.000000 | 807.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 70.000000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1018.0 | 2.271346 | 1952.087902 | 912.855545 | 86.992183 | 20.123249 | 324.774576 | 209.198700 | 73.090961 | 2387.292495 | 125.007669 | 90.0 | 9.076380 | 47.019770 |
| 1019.0 | 3.444022 | 2050.089171 | 444.732634 | 145.981978 | 19.599769 | 254.215401 | 350.660830 | 72.920827 | 2360.392784 | 117.730099 | 90.0 | 10.565614 | 53.750790 |
| 1020.0 | 3.280604 | 1972.372865 | 416.836524 | 110.533477 | 23.957502 | 248.423047 | 740.142791 | 74.734344 | 2662.906040 | 236.606764 | 90.0 | 4.161154 | 67.629684 |
| 1021.0 | 3.705351 | 2066.799773 | 741.475517 | 141.397963 | 19.246945 | 275.779840 | 641.468152 | 74.042708 | 2071.715856 | 197.126067 | 90.0 | 6.313201 | 58.261074 |
| 1022.0 | 3.808020 | 1890.413468 | 417.316232 | 129.183416 | 27.474763 | 300.952708 | 758.747882 | 74.309704 | 2856.328932 | 194.754342 | 90.0 | 6.078902 | 77.434468 |
936 rows × 13 columns
Xtrain, Xtest, Ytrain, Ytest = train_test_split(train, target, test_size=0.3, random_state=1)
Xtrain
| | модуль упругости, ГПа |
|---|---|
| 775.0 | 0.301010 |
| 370.0 | 0.483233 |
| 50.0 | 0.243451 |
| 311.0 | 0.326373 |
| 868.0 | 0.512460 |
| ... | ... |
| 846.0 | 0.677026 |
| 78.0 | 0.386467 |
| 994.0 | 0.462350 |
| 263.0 | 0.493137 |
| 40.0 | 0.275313 |
655 rows × 1 columns
lr = LinearRegression()
lin_reg_mod = LinearRegression()
lin_reg_mod.fit(Xtrain, Ytrain)
pred1 = lin_reg_mod.predict(Xtrain)
pred2 = lin_reg_mod.predict(Xtest)
print('MSE train: {:.3f}, test: {:.3f}'.format(
    mean_squared_error(Ytrain, pred1),
    mean_squared_error(Ytest, pred2)))
print('R^2 train: {:.3f}, test: {:.3f}'.format(
    r2_score(Ytrain, pred1),
    r2_score(Ytest, pred2)))
MSE train: 0.036, test: 0.035
R^2 train: 0.001, test: -0.008
all_accuracies = cross_val_score(estimator=lr, X=Xtrain, y=Ytrain, cv=10)
print(all_accuracies.mean())
-0.019900774405664623
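A cross-validated R² this close to zero means the model is barely better than always predicting the mean of the target, which is the reference point R² is defined against. A `DummyRegressor` makes that baseline explicit (sketch on synthetic data):

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = rng.normal(size=100)  # target unrelated to the feature

dummy = DummyRegressor(strategy="mean")  # always predicts mean(y_train)
dummy.fit(X, y)
print(r2_score(y, dummy.predict(X)))  # ≈ 0.0: the reference point for R²
```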
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(Xtrain, Ytrain)
y_pred_knn = knn.predict(Xtest)
print("Оценка R2 к-ближайших соседей:", r2_score(Ytest, y_pred_knn))
Оценка R2 к-ближайших соседей: -0.2809276120596824
all_accuracies = cross_val_score(estimator=knn, X=Xtrain, y=Ytrain, cv=10)
print(all_accuracies.mean())
-0.2397751063721448
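The choice n_neighbors=5 is arbitrary; it can be tuned with the same grid-search machinery that is applied to the random forest further below. A sketch on synthetic data (the grid values here are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(150, 1))
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.1, size=150)

# Tune the neighbourhood size and the weighting scheme jointly
search = GridSearchCV(
    KNeighborsRegressor(),
    param_grid={"n_neighbors": [3, 5, 7, 9, 11], "weights": ["uniform", "distance"]},
    cv=10,
)
search.fit(X, y)
print(search.best_params_)
```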
rfr = RandomForestRegressor()
model = RandomForestRegressor(n_estimators=1500, max_depth=10)
model.fit(Xtrain, Ytrain)
y_pred_forest = model.predict(Xtest)
print('R^2 test: {:.3f}'. format(r2_score(Ytest, y_pred_forest)))
R^2 test: -0.166
all_accuracies = cross_val_score(estimator=rfr, X=Xtrain, y=Ytrain, cv=10)
print(all_accuracies.mean())
-0.5394410777074888
print("Размер тренировочного датасета на входе:", Xtrain.shape)
print("Размер тестового датасета на входе:", Xtest.shape)
print("Размер тренировочного датасета на выходе:", Ytrain.shape)
print("Размертестового датасета на выходе:", Ytest.shape)
Размер тренировочного датасета на входе: (655, 1) Размер тестового датасета на входе: (281, 1) Размер тренировочного датасета на выходе: (655,) Размертестового датасета на выходе: (281,)
lr = LinearRegression()
lr.fit(Xtrain, Ytrain)
y_pred = lr.predict(Xtest)
print("Оценка R2 линейная регрессия:", r2_score(Ytest, y_pred))
Оценка R2 линейная регрессия: -0.008008096662396769
all_accuracies = cross_val_score(estimator=lr, X=Xtrain, y=Ytrain, cv=10)
print(all_accuracies.mean())
-0.019900774405664623
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(Xtrain, Ytrain)
y_pred_knn = knn.predict(Xtest)
print("Оценка R2 к-ближайших соседей:", r2_score(Ytest, y_pred_knn))
Оценка R2 к-ближайших соседей: -0.2809276120596824
all_accuracies = cross_val_score(estimator=knn, X=Xtrain, y=Ytrain, cv=10)
print(all_accuracies.mean())
-0.2397751063721448
model = RandomForestRegressor(n_estimators=2000, max_depth=5)
model.fit(Xtrain, Ytrain)
y_pred_forest = model.predict(Xtest)
print('R^2 test: {:.3f}'. format(r2_score(Ytest, y_pred_forest)))
R^2 test: -0.028
all_accuracies = cross_val_score(estimator=rfr, X=Xtrain, y=Ytrain, cv=10)
print(all_accuracies.mean())
-0.548077758704397
random_forest_tuning = RandomForestRegressor(random_state = 30)
param_grid = {
    'n_estimators': [100, 200, 500, 700, 900],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth': [2, 3],
    'criterion': ['squared_error']
}
GSCV = GridSearchCV(estimator=random_forest_tuning, param_grid=param_grid, cv=5, verbose=2)
GSCV.fit(Xtrain, Ytrain)
GSCV.best_params_
Fitting 5 folds for each of 30 candidates, totalling 150 fits
[verbose per-fold CV log omitted]
time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=sqrt, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=100; total time= 0.0s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=100; total time= 0.0s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=100; total time= 0.0s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=100; total time= 0.0s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=100; total time= 0.0s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=200; total time= 0.1s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=200; total time= 0.1s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=200; total time= 0.1s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=200; total time= 0.1s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=200; total time= 0.1s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=500; total time= 0.3s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=500; total time= 0.3s [CV] END criterion=squared_error, max_depth=3, max_features=log2, 
n_estimators=500; total time= 0.3s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=500; total time= 0.3s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=500; total time= 0.3s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=700; total time= 0.5s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=900; total time= 0.8s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=900; total time= 0.8s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=900; total time= 0.7s [CV] END criterion=squared_error, max_depth=3, max_features=log2, n_estimators=900; total time= 0.8s
{'criterion': 'squared_error',
'max_depth': 2,
'max_features': 'auto',
'n_estimators': 200}
rf = GSCV.best_estimator_
rf.fit(Xtrain, Ytrain)
RandomForestRegressor(max_depth=2, n_estimators=200, random_state=30)
rf.predict(Xtest)
array([0.49937237, 0.51721498, 0.51721498, 0.50379367, 0.51299246,
0.52611057, 0.51430736, 0.49776982, 0.48724366, 0.49407495,
0.49814568, 0.51351549, 0.49863203, 0.51181923, 0.51315862,
0.49911694, 0.5019478 , 0.5090705 , 0.49911694, 0.51493776,
0.51299246, 0.50855023, 0.51210569, 0.51721498, 0.49806138,
0.51138032, 0.46948417, 0.51250217, 0.45023305, 0.45310457,
0.49776982, 0.51105479, 0.51243288, 0.49785246, 0.51721498,
0.50760882, 0.44862968, 0.50399254, 0.45310457, 0.51240961,
0.51721498, 0.51581007, 0.51721498, 0.48825309, 0.49687873,
0.49482606, 0.51315862, 0.50400678, 0.5190723 , 0.49830911,
0.49994347, 0.51315862, 0.49994347, 0.51250217, 0.4692275 ,
0.44939456, 0.49512647, 0.51243698, 0.51318389, 0.49893996,
0.49785246, 0.51430736, 0.45235154, 0.49512647, 0.5146737 ,
0.44472791, 0.48596412, 0.51316999, 0.51430736, 0.46881999,
0.50133668, 0.51206357, 0.48888741, 0.51181923, 0.5190723 ,
0.51320038, 0.51181923, 0.46948417, 0.50186822, 0.51243698,
0.51243288, 0.51721498, 0.51206357, 0.50855023, 0.50399254,
0.51430736, 0.49806138, 0.44566076, 0.51299246, 0.46182183,
0.51181923, 0.51243698, 0.47859821, 0.46948417, 0.44939456,
0.51430736, 0.44472791, 0.44777585, 0.49333712, 0.51243288,
0.51493776, 0.51240961, 0.51689932, 0.51206357, 0.48332848,
0.51493776, 0.5019478 , 0.51299246, 0.44939456, 0.51181923,
0.49994347, 0.50756467, 0.51293218, 0.51351549, 0.49407495,
0.49994347, 0.51210569, 0.44939456, 0.51275006, 0.44538118,
0.51318389, 0.51243288, 0.51609429, 0.51243288, 0.49823889,
0.51430736, 0.51250217, 0.44426185, 0.50328828, 0.50305909,
0.51243698, 0.51105479, 0.51181923, 0.44825103, 0.51947087,
0.51240961, 0.49753716, 0.51240961, 0.51240961, 0.51721498,
0.51181923, 0.49370656, 0.51243698, 0.51721498, 0.51138032,
0.49020005, 0.51947087, 0.44538118, 0.51243698, 0.51721498,
0.49826836, 0.51315862, 0.48724366, 0.51240961, 0.44472791,
0.51689932, 0.51206357, 0.51721498, 0.46182183, 0.51181923,
0.51721498, 0.46948417, 0.50471445, 0.51706577, 0.46948417,
0.51243698, 0.51430736, 0.51430736, 0.51320038, 0.50400678,
0.48332848, 0.44426185, 0.51706577, 0.51689932, 0.44328962,
0.47859821, 0.49863203, 0.49826836, 0.50855023, 0.51243698,
0.49863203, 0.51181923, 0.51721498, 0.4896781 , 0.51721498,
0.50443437, 0.49823889, 0.50760882, 0.46948417, 0.49806138,
0.51181923, 0.51293218, 0.44889125, 0.51609429, 0.51659114,
0.51315862, 0.51320038, 0.51206357, 0.51721498, 0.51581007,
0.49863203, 0.51581007, 0.5190723 , 0.51250217, 0.51250217,
0.48666789, 0.49806138, 0.4896781 , 0.5053085 , 0.47859821,
0.51430736, 0.50855023, 0.49994347, 0.51240961, 0.49333712,
0.51320067, 0.4896781 , 0.45040466, 0.51299246, 0.49994347,
0.48965645, 0.51583716, 0.49595718, 0.49773783, 0.49806138,
0.51293218, 0.5019478 , 0.51140265, 0.5019478 , 0.49776982,
0.49338115, 0.51947087, 0.49407495, 0.51299246, 0.51181923,
0.51430736, 0.51721498, 0.49806138, 0.51318389, 0.51299246,
0.44580474, 0.49512647, 0.51609429, 0.51493776, 0.46948417,
0.5044319 , 0.48724366, 0.4896781 , 0.48429615, 0.51210569,
0.51545894, 0.5190723 , 0.51243288, 0.49826836, 0.51721498,
0.51430736, 0.49863203, 0.51493776, 0.45310457, 0.51243698,
0.49806138, 0.51659114, 0.52503191, 0.5044319 , 0.48017606,
0.4692275 , 0.51430736, 0.51299246, 0.50855023, 0.49863203,
0.51430736, 0.49298072, 0.45040466, 0.51243288, 0.49753716,
0.51181923, 0.49863203, 0.57733241, 0.49806138, 0.51299246,
0.49613043])
Ytest
431.0 0.140630
45.0 0.158646
803.0 0.583773
674.0 0.328720
38.0 0.606789
...
608.0 0.543305
487.0 0.499028
62.0 0.655680
399.0 0.649320
652.0 0.539078
Name: Прочность при растяжении, МПа, Length: 281, dtype: float64
np.mean(np.abs(Ytest-rf.predict(Xtest)))
0.15034493944156616
np.mean(np.abs(Ytest-np.mean(Ytest)))
0.1488040656405075
Note that the model's test MAE (0.1503) is slightly higher than the MAE of a constant prediction equal to the mean of Ytest (0.1488): the tuned random forest does not outperform the naive baseline on this target.
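The second number above is the MAE of a model that always predicts the mean of Ytest. The same baseline check can be written with scikit-learn's `DummyRegressor`; the arrays below are synthetic stand-ins, not the thesis data:

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(17)
y_true = rng.normal(0.5, 0.1, size=281)   # stand-in for Ytest
y_pred = rng.normal(0.5, 0.02, size=281)  # stand-in for rf.predict(Xtest)

# DummyRegressor(strategy="mean") ignores X and predicts mean(y_true)
X_stub = np.zeros((281, 1))
baseline = DummyRegressor(strategy="mean").fit(X_stub, y_true)

mae_model = mean_absolute_error(y_true, y_pred)
mae_base = mean_absolute_error(y_true, baseline.predict(X_stub))
print(f"model MAE={mae_model:.4f}, baseline MAE={mae_base:.4f}")
```

A model is only useful if its MAE is clearly below the baseline MAE; on the real data above it is not.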
df
| | Соотношение матрица-наполнитель | Плотность, кг/м3 | модуль упругости, ГПа | Количество отвердителя, м.% | Содержание эпоксидных групп,%_2 | Температура вспышки, С_2 | Поверхностная плотность, г/м2 | Модуль упругости при растяжении, ГПа | Прочность при растяжении, МПа | Потребление смолы, г/м2 | Угол нашивки, град | Шаг нашивки | Плотность нашивки |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0 | 1.857143 | 2030.000000 | 738.736842 | 50.000000 | 23.750000 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 4.000000 | 60.000000 |
| 3.0 | 1.857143 | 2030.000000 | 738.736842 | 129.000000 | 21.250000 | 300.000000 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 47.000000 |
| 4.0 | 2.771331 | 2030.000000 | 753.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 57.000000 |
| 5.0 | 2.767918 | 2000.000000 | 748.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 60.000000 |
| 6.0 | 2.569620 | 1910.000000 | 807.000000 | 111.860000 | 22.267857 | 284.615385 | 210.000000 | 70.000000 | 3000.000000 | 220.000000 | 0.0 | 5.000000 | 70.000000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1018.0 | 2.271346 | 1952.087902 | 912.855545 | 86.992183 | 20.123249 | 324.774576 | 209.198700 | 73.090961 | 2387.292495 | 125.007669 | 90.0 | 9.076380 | 47.019770 |
| 1019.0 | 3.444022 | 2050.089171 | 444.732634 | 145.981978 | 19.599769 | 254.215401 | 350.660830 | 72.920827 | 2360.392784 | 117.730099 | 90.0 | 10.565614 | 53.750790 |
| 1020.0 | 3.280604 | 1972.372865 | 416.836524 | 110.533477 | 23.957502 | 248.423047 | 740.142791 | 74.734344 | 2662.906040 | 236.606764 | 90.0 | 4.161154 | 67.629684 |
| 1021.0 | 3.705351 | 2066.799773 | 741.475517 | 141.397963 | 19.246945 | 275.779840 | 641.468152 | 74.042708 | 2071.715856 | 197.126067 | 90.0 | 6.313201 | 58.261074 |
| 1022.0 | 3.808020 | 1890.413468 | 417.316232 | 129.183416 | 27.474763 | 300.952708 | 758.747882 | 74.309704 | 2856.328932 | 194.754342 | 90.0 | 6.078902 | 77.434468 |
936 rows × 13 columns
Y = norm_df[['Соотношение матрица-наполнитель']]
X = norm_df.drop(['Соотношение матрица-наполнитель'], axis=1)
Xtrain, Xholdout, Ytrain, Yholdout = train_test_split(X, Y, test_size=0.3, random_state=17)
print(Xtrain.shape, Xholdout.shape)
(655, 12) (281, 12)
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(np.array(X))
print(normalizer.mean.numpy())
[[0.5026951 0.4467639 0.5046643 0.49121633 0.5160587 0.37373248 0.48864686 0.49570566 0.5211413 0.5117521 0.5022317 0.5137764 ]]
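`Normalization.adapt` only stores per-feature means and variances; the layer then standardizes its input. The equivalent computation in plain NumPy (random stand-in data, not the thesis feature matrix) is:

```python
import numpy as np

# Stand-in for the (936, 12) feature matrix X
X = np.random.default_rng(0).uniform(0, 1, size=(936, 12))

mean = X.mean(axis=0)   # what normalizer.mean holds after adapt()
var = X.var(axis=0)     # what normalizer.variance holds

# What the layer outputs (up to a small epsilon added to var for stability)
X_std = (X - mean) / np.sqrt(var)
print(mean.round(3))
```

After this transform each feature has zero mean and unit variance, which is what the dense layers downstream see.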
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 10])
    plt.xlabel('Эпоха')
    plt.ylabel('MAE')
    plt.legend()
    plt.grid(True)
def build_and_compile_model(norm):
    model = keras.Sequential([
        norm,
        layers.Dense(40, activation='relu'),
        layers.Dense(40, activation='relu'),
        layers.Dense(2)  # NB: two outputs, although Ytrain contains a single column
    ])
    model.compile(loss='mean_absolute_error',
                  optimizer=tf.keras.optimizers.Adam(0.001))
    return model
dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
normalization (Normalization)  (None, 12)               25
dense (Dense) (None, 40) 520
dense_1 (Dense) (None, 40) 1640
dense_2 (Dense) (None, 2) 82
=================================================================
Total params: 2,267
Trainable params: 2,242
Non-trainable params: 25
_________________________________________________________________
%%time
history = dnn_model.fit(
    Xtrain.values,
    Ytrain.values,
    epochs=100,
    verbose=1,
    validation_split=0.2)
Epoch 1/100   17/17 [==============================] - 0s 7ms/step - loss: 0.4584 - val_loss: 0.3421
…
Epoch 48/100  17/17 [==============================] - 0s 2ms/step - loss: 0.0882 - val_loss: 0.1629
…
Epoch 100/100 17/17 [==============================] - 0s 2ms/step - loss: 0.0494 - val_loss: 0.1800
[remaining per-epoch log omitted; val_loss reaches its minimum of 0.1629 around epoch 48 and drifts upward afterwards while the training loss keeps falling, i.e. the network starts to overfit]
Wall time: 3.75 s
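Since val_loss stops improving near epoch 48 while training loss keeps decreasing, patience-based early stopping would have ended training around that point (in Keras this is `tf.keras.callbacks.EarlyStopping`). A framework-agnostic sketch of the rule, with a toy loss curve:

```python
def early_stop_epoch(val_losses, patience=10):
    """Index of the epoch where training stops: the first epoch after which
    val_loss has failed to improve for `patience` consecutive epochs."""
    best, best_i = float("inf"), 0
    for i, v in enumerate(val_losses):
        if v < best:
            best, best_i = v, i
        elif i - best_i >= patience:
            return i
    return len(val_losses) - 1

# Toy curve: improves, then plateaus and rises (like the log above)
curve = [0.34, 0.27, 0.23, 0.21, 0.20, 0.20, 0.21, 0.21, 0.22, 0.22,
         0.23, 0.23, 0.24, 0.24, 0.25]
print(early_stop_epoch(curve, patience=5))  # → 9
```

Keras' callback additionally supports `restore_best_weights=True`, which rolls the model back to the best-val_loss epoch rather than keeping the final weights.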
plot_loss(history)
dnn_model.evaluate(Xtest, Ytest, verbose=0)  # NB: Xtest/Ytest come from the earlier strength-prediction split; the holdout created for this target is Xholdout/Yholdout
0.22227410972118378
Y = norm_df[['Соотношение матрица-наполнитель']]
X = norm_df.drop(['Соотношение матрица-наполнитель'], axis=1)
Xtrain, Xholdout, Ytrain, Yholdout = train_test_split(X, Y, test_size=0.3, random_state=17)
print(Xtrain.shape, Xholdout.shape)
(655, 12) (281, 12)
normalizer2 = tf.keras.layers.Normalization(axis=-1)
normalizer2.adapt(np.array(X))
print(normalizer2.mean.numpy())
[[0.5026951 0.4467639 0.5046643 0.49121633 0.5160587 0.37373248 0.48864686 0.49570566 0.5211413 0.5117521 0.5022317 0.5137764 ]]
def build_and_compile_model(norm):
    model2 = keras.Sequential([
        norm,
        layers.Dense(300, activation='sigmoid'),
        layers.Dropout(0.6),
        layers.Dense(20, activation='sigmoid'),
        layers.Dense(10, activation='sigmoid'),
        layers.Dense(1)
    ])
    model2.compile(loss='mean_absolute_error',
                   optimizer=tf.keras.optimizers.RMSprop(0.001))
    return model2
dnn_model2 = build_and_compile_model(normalizer2)
dnn_model2.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
normalization_1 (Normalization)  (None, 12)             25
dense_3 (Dense) (None, 300) 3900
dropout (Dropout) (None, 300) 0
dense_4 (Dense) (None, 20) 6020
dense_5 (Dense) (None, 10) 210
dense_6 (Dense) (None, 1) 11
=================================================================
Total params: 10,166
Trainable params: 10,141
Non-trainable params: 25
_________________________________________________________________
%%time
history2 = dnn_model2.fit(
    Xtrain,
    Ytrain,
    validation_split=0.2,
    verbose=0, epochs=50)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<timed exec> in <module>
[condensed: keras\utils\traceback_utils.py → pandas\core\frame.py __getitem__ → pandas\core\indexing.py convert_to_index_sliceable → pandas\core\indexes\base.py get_slice_bound]
KeyError: 0
The failure comes from pandas, not Keras: with validation_split, Keras slices the training data with a plain positional slice, but Xtrain is here a DataFrame whose integer index no longer contains 0 (rows were dropped as outliers and shuffled by train_test_split), so the slice is resolved by labels and label 0 is missing. The first model avoided this by passing Xtrain.values / Ytrain.values.
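The fix is to hand Keras NumPy arrays rather than DataFrames. A minimal sketch with a hypothetical frame reproducing the index situation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(17)
# A frame with a shuffled integer index that no longer contains 0 —
# like Xtrain after outlier removal and train_test_split.
df = pd.DataFrame(rng.random((10, 3)), index=rng.permutation(np.arange(5, 15)))

# Keras' validation_split slices the data as `data[start:stop]`. On this
# DataFrame an integer slice is resolved by *labels*, and the missing label 0
# produced the `KeyError: 0` seen above (with the pandas version used here).
X_np = df.to_numpy()        # df.values behaves the same way
print(X_np[0:8].shape)      # positional slicing on the array is always safe
```

Resetting the index (`df.reset_index(drop=True)`) before fitting would also work, but converting to arrays matches what was done for the first model.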
plot_loss(history)  # NB: this re-plots the first model's history; history2 was never created because the fit above raised
dnn_model2.evaluate(Xtest, Ytest, verbose=0)  # NB: dnn_model2 was never successfully trained, so this is the loss of an untrained network
0.9249774813652039